

Instance-based Generalization in Reinforcement Learning

Neural Information Processing Systems

Agents trained via deep reinforcement learning (RL) routinely fail to generalize to unseen environments, even when these share the same underlying dynamics as the training levels. Understanding the generalization properties of RL is one of the challenges of modern machine learning. Towards this goal, we analyze policy learning in the context of Partially Observable Markov Decision Processes (POMDPs) and formalize the dynamics of training levels as instances. We prove that, independently of the exploration strategy, reusing instances introduces significant changes to the effective Markov dynamics the agent observes during training. Maximizing expected rewards impacts the learned belief state of the agent by inducing undesired instance-specific speed-running policies instead of generalizable ones, which are sub-optimal on the training set. We provide generalization bounds on the value gap between train and test environments based on the number of training instances, and use the resulting insights to improve performance on unseen levels. We propose training a shared belief representation over an ensemble of specialized policies, from which we compute a consensus policy that is used for data collection, disallowing instance-specific exploitation. We experimentally validate our theory, observations, and the proposed computational solution on the CoinRun benchmark.
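The ensemble-plus-consensus idea in the abstract can be illustrated with a minimal sketch. All names here are hypothetical and the linear policy heads stand in for the deep networks the paper actually trains: each specialized head maps the shared belief state to an action distribution, and the consensus policy used for data collection is the average of those distributions, so no single head's instance-specific shortcut dominates.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over a 1-D logit vector."""
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def consensus_policy(belief, policy_heads):
    """Average the action distributions of all specialized heads.

    belief: shared belief-state vector (here 8-dim), assumed to come
    from a belief encoder trained jointly across instances.
    policy_heads: one (n_actions x belief_dim) head per training instance.
    """
    dists = [softmax(head @ belief) for head in policy_heads]
    return np.mean(dists, axis=0)

rng = np.random.default_rng(0)
belief = rng.normal(size=8)                           # shared belief state
heads = [rng.normal(size=(4, 8)) for _ in range(3)]   # 3 specialized heads, 4 actions
pi = consensus_policy(belief, heads)                  # distribution over 4 actions
```

Because the averaged distribution is what drives exploration, an action that only one head favors (e.g. a speed-running shortcut specific to its instance) is down-weighted unless the other heads agree.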


Review for NeurIPS paper: Instance-based Generalization in Reinforcement Learning

Neural Information Processing Systems

Weaknesses: The paper lacks many important details, which prevents the reader from judging the novelty and full contribution of the work. Even after reading the rebuttal, an overview of the proposed solution and the problem setting would be of much help to readers. Is the entire game (with all levels) considered as a POMDP? I see sentences such as "Line 62: environment is considered as a Markov process". How is the generalization problem being modelled?


Review for NeurIPS paper: Instance-based Generalization in Reinforcement Learning

Neural Information Processing Systems

The paper addresses the problem of generalization in POMDPs, and all reviewers agreed that it contains clever, well-evaluated ideas and makes a good contribution. The reviewers also agreed that there are presentation problems that the authors should fix, but that these can be handled in a revision. Hence, I recommend acceptance and very strongly encourage the authors to revise the paper and improve the writing, taking into account the detailed comments in the reviews.
